Export Reviews, Discussions, Author Feedback and Meta-Reviews

Neural Information Processing Systems

We thank the reviewers for their helpful comments and questions. We will implement all their valuable suggestions for improving the presentation and the readability of the text and the figures. For clarity, the practical contributions of this work can be summarized as follows. We present a general scheme for approximate inference in pairwise binary graphical models. The scheme requires constructing an "attractive" 2-cover of the base graph and performing belief propagation on this cover.
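The scheme described above hinges on building an "attractive" 2-cover of the base graph. One standard way such a cover can be built (a sketch in the ±1 Ising convention with hypothetical helper names, not the authors' code): duplicate every node into two layers, keep attractive edges within each layer, and route repulsive edges across layers; negating the layer-1 spins then flips the sign of every cross-layer coupling (and of the layer-1 fields), so all couplings in the cover are nonnegative.

```python
import itertools
import math

def log_Z(n, h, J):
    """Exact log partition function by enumeration over {-1, +1}^n."""
    Z = sum(
        math.exp(sum(h[i] * s[i] for i in range(n))
                 + sum(w * s[i] * s[j] for (i, j), w in J.items()))
        for s in itertools.product((-1, 1), repeat=n))
    return math.log(Z)

def attractive_2cover(n, h, J):
    """Two copies of each node; node (i, layer) is encoded as i + layer * n.
    Attractive edges stay within each layer; repulsive edges cross layers.
    Negating layer-1 spins flips the sign of cross-layer couplings (and of
    the layer-1 fields), so every returned coupling is >= 0."""
    h2 = list(h) + [-hi for hi in h]
    J2 = {}
    for (i, j), w in J.items():
        if w >= 0:                       # attractive: copy within each layer
            J2[(i, j)] = w
            J2[(i + n, j + n)] = w
        else:                            # repulsive: cross layers, sign flips
            J2[(i, j + n)] = -w
            J2[(j, i + n)] = -w
    return h2, J2

# Toy model with one repulsive edge (illustrative parameters).
n = 3
h = [0.3, -0.2, 0.1]
J = {(0, 1): 0.4, (1, 2): -0.6, (0, 2): 0.5}

h2, J2 = attractive_2cover(n, h, J)
assert all(w >= 0 for w in J2.values())  # the cover is attractive

# Sanity check: the layer-1 spin flip is a bijection on states, so the
# attractive reparameterization has the same partition function as the
# "plain" 2-cover (repulsive edges crossed, fields copied unchanged).
h_plain = list(h) + list(h)
J_plain = {}
for (i, j), w in J.items():
    if w >= 0:
        J_plain[(i, j)] = w
        J_plain[(i + n, j + n)] = w
    else:
        J_plain[(i, j + n)] = w
        J_plain[(j, i + n)] = w
print(log_Z(2 * n, h2, J2))  # equals log_Z(2 * n, h_plain, J_plain)
```

Belief propagation can then be run on the cover model `(h2, J2)` exactly as on the base model, since every coupling in the cover is attractive.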


Loop Series and Bethe Variational Bounds in Attractive Graphical Models

Sudderth, Erik B., Wainwright, Martin J., Willsky, Alan S.

Neural Information Processing Systems

Variational methods are frequently used to approximate or bound the partition or likelihood function of a Markov random field. Methods based on mean field theory are guaranteed to provide lower bounds, whereas certain types of convex relaxations provide upper bounds. In general, loopy belief propagation (BP) provides (often accurate) approximations, but not bounds. We prove that for a class of attractive binary models, the value specified by any fixed point of loopy BP always provides a lower bound on the true likelihood. Empirically, this bound is much better than the naive mean field bound, and requires no further work than running BP.
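The abstract's claims can be checked numerically on a toy problem. Below is a runnable sketch (illustrative model and parameters, not the paper's experiments): loopy BP is run to a fixed point on a small attractive binary model, the Bethe free energy is evaluated at that fixed point, and the resulting bound on log Z is compared with the exact value and with the naive mean field bound.

```python
import itertools
import math

# 4-cycle, x_i in {0, 1}, p(x) ∝ exp(Σ_i h_i x_i + Σ_(i,j) J_ij x_i x_j);
# all J_ij > 0, so the model is attractive.
n = 4
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
h = [0.2, -0.1, 0.3, 0.1]
J = {e: 0.5 for e in edges}

nbrs = {i: [] for i in range(n)}
for i, j in edges:
    nbrs[i].append(j)
    nbrs[j].append(i)

def psi(e, xi, xj):  # symmetric pairwise potential exp(J * xi * xj)
    return math.exp(J[e] * xi * xj)

# Exact log Z by brute-force enumeration (feasible for n = 4).
Z = sum(math.exp(sum(h[i] * x[i] for i in range(n))
                 + sum(J[e] * x[e[0]] * x[e[1]] for e in edges))
        for x in itertools.product((0, 1), repeat=n))
log_Z = math.log(Z)

# Loopy BP: damped synchronous message updates to an (approximate) fixed point.
msg = {(i, j): [0.5, 0.5] for e in edges for (i, j) in (e, e[::-1])}
for _ in range(500):
    new = {}
    for (i, j) in msg:
        e = (i, j) if (i, j) in J else (j, i)
        m = [sum(math.exp(h[i] * xi) * psi(e, xi, xj)
                 * math.prod(msg[k, i][xi] for k in nbrs[i] if k != j)
                 for xi in (0, 1))
             for xj in (0, 1)]
        s = m[0] + m[1]
        new[i, j] = [0.5 * msg[i, j][x] + 0.5 * m[x] / s for x in (0, 1)]
    msg = new

def belief_i(i):
    b = [math.exp(h[i] * xi) * math.prod(msg[k, i][xi] for k in nbrs[i])
         for xi in (0, 1)]
    return [v / sum(b) for v in b]

def belief_ij(e):
    i, j = e
    b = {(xi, xj): math.exp(h[i] * xi + h[j] * xj) * psi(e, xi, xj)
         * math.prod(msg[k, i][xi] for k in nbrs[i] if k != j)
         * math.prod(msg[k, j][xj] for k in nbrs[j] if k != i)
         for xi in (0, 1) for xj in (0, 1)}
    s = sum(b.values())
    return {k: v / s for k, v in b.items()}

# Bethe value at the fixed point: log Z_Bethe = H_Bethe - U.
U, H = 0.0, 0.0
for e in edges:
    for (xi, xj), p in belief_ij(e).items():
        U -= p * J[e] * xi * xj
        if p > 0:
            H -= p * math.log(p)
for i in range(n):
    b = belief_i(i)
    for xi in (0, 1):
        U -= b[xi] * h[i] * xi
        if b[xi] > 0:
            H += (len(nbrs[i]) - 1) * b[xi] * math.log(b[xi])
log_Z_bethe = H - U

# Naive mean field lower bound via coordinate ascent on a product distribution.
q = [0.5] * n
for _ in range(200):
    for i in range(n):
        t = h[i] + sum(J[min(i, k), max(i, k)] * q[k] for k in nbrs[i])
        q[i] = 1.0 / (1.0 + math.exp(-t))

def ent(p):
    return -p * math.log(p) - (1 - p) * math.log(1 - p)

log_Z_mf = (sum(h[i] * q[i] for i in range(n))
            + sum(J[e] * q[e[0]] * q[e[1]] for e in edges)
            + sum(ent(q[i]) for i in range(n)))

print(f"log Z (exact) = {log_Z:.4f}")
print(f"log Z (Bethe) = {log_Z_bethe:.4f}")  # lower bound for this attractive model
print(f"log Z (MF)    = {log_Z_mf:.4f}")     # always a lower bound, but looser
```

On this instance the fixed-point Bethe value sits between the naive mean field bound and the true log-partition function, which is the ordering the abstract describes; the specific graph, fields, and couplings above are illustrative choices, not taken from the paper.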